    Developing a user-centred Communication Pad for Cognitive and Physical Impaired People

    Stabilising touch interactions in cockpits, aerospace, and vibrating environments

    © Springer International Publishing AG, part of Springer Nature 2018.
    Incorporating touch screen interaction into cockpit flight systems is increasingly gaining traction, given its several potential advantages for design as well as usability for pilots. However, perturbations to the user input are prevalent in such environments due to vibration, turbulence and high accelerations. This poses particular challenges for interacting with displays in the cockpit, for example accidental activation during turbulence, or high levels of distraction from the primary task of aircraft control while accomplishing selection tasks. Predictive displays, on the other hand, have emerged as a solution to minimise the effort as well as the cognitive, visual and physical workload associated with using in-vehicle displays under perturbations induced by road and driving conditions. This technology employs 3D gesture tracking, and potentially eye-gaze as well as other sensory data, to substantially facilitate the acquisition (pointing and selection) of an interface component by predicting the item the user intends to select on the display early in the movement towards the screen. A key aspect is the use of principled Bayesian modelling to incorporate and treat the present perturbation; it is thus a software-based solution that has shown promising results when applied to automotive applications. This paper explores the potential of applying this technology to applications in aerospace and vibrating environments in general, and presents design recommendations for such an approach to enhance interaction accuracy as well as safety.
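
    The abstract describes the predictive-display idea only at a high level. The sketch below illustrates one way such Bayesian intent inference over interface targets could look; the Gaussian perturbation model, the trajectory weighting and all parameter values are illustrative assumptions, not the paper's method.

```python
import numpy as np

def target_posterior(finger_positions, targets, noise_std=0.05, prior=None):
    """Posterior probability that a perturbed pointing movement is heading
    for each on-screen target.

    finger_positions: (T, 2) observed fingertip samples in normalised screen
    coordinates, already perturbed by vibration or turbulence.
    targets: (K, 2) centres of the selectable interface components.
    noise_std: assumed standard deviation of the perturbation (illustrative).
    prior: optional (K,) prior over targets, e.g. from usage frequency.
    """
    targets = np.asarray(targets, dtype=float)
    obs = np.asarray(finger_positions, dtype=float)
    K = len(targets)
    prior = np.full(K, 1.0 / K) if prior is None else np.asarray(prior, dtype=float)

    # Likelihood under each hypothesised target: treat each sample as the
    # target location plus Gaussian perturbation, weighting later samples more
    # strongly because they carry more information about the user's intent.
    weights = np.linspace(0.2, 1.0, len(obs))
    log_lik = np.array([
        np.sum(weights * (-np.sum((obs - t) ** 2, axis=1) / (2.0 * noise_std ** 2)))
        for t in targets
    ])

    log_post = np.log(prior) + log_lik
    log_post -= log_post.max()                 # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

if __name__ == "__main__":
    buttons = np.array([[0.2, 0.8], [0.5, 0.8], [0.8, 0.8]])   # three menu items
    # A short, noisy trajectory drifting toward the middle button.
    trajectory = [[0.50, 0.10], [0.48, 0.35], [0.53, 0.55], [0.50, 0.70]]
    print(target_posterior(trajectory, buttons).round(3))
```

    In a deployed system the perturbation model would be informed by the additional sensory data the abstract mentions (e.g. accelerometer readings), and the posterior would be updated continuously as the hand approaches the screen.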

    A new framework for sign language alphabet hand posture recognition using geometrical features through artificial neural network (part 1)

    Hand pose tracking is essential in sign languages. Automatic recognition of performed hand signs facilitates a number of applications, especially enabling people with speech impairments to communicate with other people. This framework, called ASLNN, proposes a new hand posture recognition technique for the American sign language alphabet based on a neural network that works on geometrical features extracted from the hand. A user's hand is captured by a three-dimensional depth-based sensor camera, and the hand is then segmented according to depth analysis features. The proposed system is called depth-based geometrical sign language recognition (DGSLR). DGSLR adopts an easier hand segmentation approach, which can further be used in other segmentation applications. The proposed geometrical feature extraction framework improves recognition accuracy because the features are invariant to hand orientation, in contrast to the discrete cosine transform and moment invariants. The findings of the iterations demonstrate that combining the extracted features resulted in improved accuracy rates. An artificial neural network is then used to derive the desired outcomes. ASLNN is proficient at hand posture recognition and provides accuracy of up to 96.78%, which will be discussed in the authors' follow-up paper in this journal.
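
    The abstract does not detail the geometrical features or the network architecture, so the following is a minimal stand-in: orientation- and scale-invariant radial-distance features computed from a segmented hand contour, fed to a small feed-forward network (here scikit-learn's MLPClassifier trained on synthetic data). The feature choice, network size and synthetic classes are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def geometric_features(hand_points, n_features=32):
    """Orientation- and scale-invariant descriptor of a segmented hand.

    hand_points: (N, 2) contour points of the hand mask (e.g. obtained from a
    depth-camera segmentation).  The exact features used by DGSLR are not given
    in the abstract; sorted, scale-normalised radial distances are a generic
    stand-in that does not change under in-plane rotation of the hand.
    """
    pts = np.asarray(hand_points, dtype=float)
    radii = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    radii = np.sort(radii) / (radii.max() + 1e-9)            # remove scale
    idx = np.linspace(0, len(radii) - 1, n_features).astype(int)
    return radii[idx]                                        # fixed-length vector

# Train a small feed-forward network on synthetic stand-in data for the
# 26 alphabet classes; real training would use labelled depth captures.
rng = np.random.default_rng(0)
X = np.stack([geometric_features(rng.normal(size=(200, 2)) * [1.0, 1.0 + 0.2 * c])
              for c in range(26) for _ in range(20)])
y = np.repeat(np.arange(26), 20)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```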

    Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems

    Learning involves a substantial range of cognitive, social and emotional states. Recognizing and understanding these states in the context of learning is therefore key to designing informed interventions and addressing the needs of the individual student to provide personalized education. In this paper, we explore the automatic detection of learners' nonverbal behaviors, involving hand-over-face gestures, head and eye movements, and emotions via facial expressions, during learning. The proposed computer vision-based behavior monitoring method uses a low-cost webcam and can easily be integrated with modern tutoring technologies. We investigate these behaviors in depth over time in a 40-minute classroom session involving reading and problem-solving exercises. The exercises in the session are divided into three categories: an easy, a medium and a difficult topic within the context of undergraduate computer science. We found a significant increase in head and eye movements as time progresses, as well as with increasing difficulty level. We show that there is a considerable occurrence of hand-over-face gestures (21.35% on average) during the 40-minute session, a behavior that is largely unexplored in the education domain. We propose a novel deep learning approach for automatic detection of hand-over-face gestures in images, with a classification accuracy of 86.87%. There is a prominent increase in hand-over-face gestures as the difficulty level of the given exercise increases. Hand-over-face gestures occur more frequently during problem-solving exercises (easy 23.79%, medium 19.84%, difficult 30.46%) than during reading (easy 16.20%, medium 20.06%, difficult 20.18%).
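
    The deep learning architecture used for hand-over-face detection is not specified in the abstract; a common minimal baseline for this kind of webcam-image classification is transfer learning from a pretrained backbone, sketched below with a fine-tuned ResNet-18 head as an assumed stand-in, not the paper's model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary classifier: does a webcam frame contain a hand-over-face gesture?
def build_hand_over_face_model(num_classes: int = 2) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():          # freeze the pretrained backbone
        p.requires_grad = False
    # Replace the final layer with a new head for the two classes.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_hand_over_face_model().eval()
    frame = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed webcam frame
    with torch.no_grad():
        logits = model(frame)
    print("P(hand-over-face):", torch.softmax(logits, dim=1)[0, 1].item())
```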

    Calcium orthophosphate-based biocomposites and hybrid biomaterials

    Making Object Detection Available to Everyone - A Hardware Prototype for Semi-automatic Synthetic Data Generation

    The capabilities of object detection are well known, but many projects do not use them despite the potential benefit. Even though the use of object detection algorithms is facilitated by frameworks and publications, a major hurdle is the creation of the necessary training data. To tackle this issue, this work presents the design and evaluation of a prototype that allows users to create synthetic datasets for object detection in images. The prototype is evaluated using YOLOv3 as the underlying detector and shows that the generated datasets are equal in quality to manually created data. This encourages a wide adoption of object detection algorithms in different areas, since image creation and labeling is often the most time-consuming step.
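
    The abstract does not describe the prototype's pipeline in detail; a typical core step of semi-automatic synthetic data generation is compositing object cut-outs onto varied backgrounds while emitting the matching labels. The sketch below writes YOLO-format annotations (the format YOLOv3 expects); the file names, paths and randomisation ranges are placeholders and assumptions.

```python
import random
from pathlib import Path
from PIL import Image

def composite_sample(obj_path, bg_path, out_img, out_label, class_id=0):
    """Paste a cut-out object onto a background and write a YOLO-format label.

    This mirrors the general idea of synthetic data generation (object crops
    plus varied backgrounds); the prototype's actual pipeline is an assumption.
    Assumes the cut-out has an alpha channel and fits inside the background.
    """
    bg = Image.open(bg_path).convert("RGB")
    obj = Image.open(obj_path).convert("RGBA")

    # Random scale and position for the pasted object.
    scale = random.uniform(0.2, 0.5)
    ow, oh = int(obj.width * scale), int(obj.height * scale)
    obj = obj.resize((ow, oh))
    x = random.randint(0, bg.width - ow)
    y = random.randint(0, bg.height - oh)
    bg.paste(obj, (x, y), obj)             # alpha channel used as paste mask

    # YOLO label line: "class x_center y_center width height", all normalised.
    cx, cy = (x + ow / 2) / bg.width, (y + oh / 2) / bg.height
    w, h = ow / bg.width, oh / bg.height
    bg.save(out_img)
    Path(out_label).write_text(f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")

# Example call (paths are placeholders):
# composite_sample("cup_cutout.png", "office_bg.jpg", "img_0001.jpg", "img_0001.txt")
```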

    Gesture modelling and recognition by integrating declarative models and pattern recognition algorithms

    Gesture recognition approaches based on computer vision and machine learning mainly focus on recognition accuracy and robustness. Research on user interface development focuses instead on the orthogonal problem of providing guidance for performing and discovering interactive gestures, through compositional approaches that provide information on gesture sub-parts. We make a first step toward combining the advantages of both approaches. We introduce DEICTIC, a compositional and declarative gesture description model which uses basic Hidden Markov Models (HMMs) to recognize meaningful pre-defined primitives (gesture sub-parts), and a composition of basic HMMs to recognize complex gestures. Preliminary empirical results show that DEICTIC exhibits recognition performance similar to the "monolithic" HMMs used in state-of-the-art vision-based approaches, while retaining the advantages of declarative approaches.
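
    To make the compositional idea concrete, the toy sketch below chains small discrete HMMs for primitive movement directions into a single HMM for a complex gesture. The quantised observation symbols, hand-set probabilities and chaining scheme are illustrative assumptions rather than DEICTIC's actual formulation.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Standard HMM forward algorithm: log-likelihood of a discrete sequence."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return float(np.log(alpha.sum() + 1e-300))

def chain(*primitives):
    """Sequentially compose primitive HMMs (start, trans, emit) into one HMM."""
    starts, _, emits = zip(*primitives)
    sizes = [len(s) for s in starts]
    N, M = sum(sizes), emits[0].shape[1]
    start = np.zeros(N)
    start[:sizes[0]] = starts[0]
    trans, emit = np.zeros((N, N)), np.zeros((N, M))
    off = 0
    for i, (s, T, E) in enumerate(primitives):
        last = (i == len(primitives) - 1)
        stay = 1.0 if last else 0.9                   # probability of staying in this part
        trans[off:off + sizes[i], off:off + sizes[i]] = T * stay
        if not last:                                  # hop into the next primitive
            nxt = off + sizes[i]
            trans[off:off + sizes[i], nxt:nxt + sizes[i + 1]] = (1 - stay) * starts[i + 1]
        emit[off:off + sizes[i]] = E
        off += sizes[i]
    return start, trans, emit

# Observation symbols: 0 = rightward step, 1 = upward step.
right = (np.array([1.0]), np.array([[1.0]]), np.array([[0.9, 0.1]]))
up    = (np.array([1.0]), np.array([[1.0]]), np.array([[0.1, 0.9]]))

l_shape = chain(right, up)                            # "move right, then up"
trajectory = np.array([0, 0, 0, 1, 1])                # goes right, then up
print(forward_log_likelihood(trajectory, *l_shape))
```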

    Industry 4.0 Visions and Reality - Status in Norway

    Part 5: Industry 4.0 Implementations
    The concept and vision of Industry 4.0 has been around for almost a decade and has gained a lot of momentum and attention globally. Central to the vision of Industry 4.0 is the concept of a "cyber-physical system", linking the IT elements of an enterprise (cyber) with its physical system (man and machine). This vision is well known and is promoted as crucial in radically transforming today's manufacturing industry. While there is a plethora of papers and studies on the various "cyber" aspects, the concept, visions, benefits as well as the downsides of Industry 4.0, few papers have much to say about the actual implementation. Based on a digital maturity mapping of ten front-line manufacturing enterprises in Norway, this paper analyses implementation at the shop-floor level of both the cyber and the physical system and their interaction. From the survey data a clear picture emerges of the development of a cyber system, as well as of worker usage and benefit of that system. However, the two systems do not interact very well: worker interaction is limited to plain old keyboard usage, instead of employing more mobile, hands-free, voice-based or similar interaction methods. Currently there is no cyber-physical system, but rather a burgeoning cyber system poorly linked to the physical world. If the cyber-physical system is to be realized, there is a need for a rethinking and upgrading of man-machine interaction.